
    The ghosts of forgotten things: A study on size after forgetting

    Forgetting is removing variables from a logical formula while preserving the constraints on the other variables. In spite of being a form of reduction, it does not always decrease the size of the formula and may sometimes increase it. This article discusses the implications of such an increase and analyzes the computational properties of the phenomenon. Given a propositional Horn formula, a set of variables and a maximum allowed size, deciding whether forgetting the variables from the formula can be expressed within that size is $D^p$-hard and in $\Sigma^p_2$. The same problem for unrestricted propositional formulae is $D^p_2$-hard and in $\Sigma^p_3$. The hardness results employ superredundancy: a superirredundant clause is contained in every minimal-size formula equivalent to a given one. This concept may be useful outside forgetting.
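    As a toy illustration of the forgetting operation itself (not of the article's size analysis), the sketch below computes propositional forgetting by Shannon expansion, forget(F, x) = F[x := true] ∨ F[x := false], using sympy. The formula and variable names are invented for the example.

```python
# A minimal sketch of propositional forgetting via Shannon expansion:
# forget(F, x) = F[x := true] OR F[x := false].
# The formula and variable names are invented for illustration.
from sympy import symbols, And, Or, Implies, simplify_logic

a, b, x = symbols('a b x')
f = And(Implies(a, x), Implies(x, b))        # (a -> x) & (x -> b)

# Forgetting x preserves exactly the constraints on a and b.
forgotten = simplify_logic(Or(f.subs(x, True), f.subs(x, False)))
print(forgotten)                             # b | ~a, equivalent to a -> b
```

    Here forgetting happens to shrink the formula; the article's point is precisely that this is not guaranteed, and that deciding whether a small equivalent exists is computationally hard.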

    The size of BDDs and other data structures in temporal logics model checking

    Temporal logic model checking is a verification method in which we describe a system (the model) and then check whether important properties, expressed as temporal logic formulae, hold in it. Many model checking tools employ BDDs or some other data structure to represent sets of states. It has been empirically observed that the BDDs used in these algorithms may grow exponentially as the model and formula increase in size. We formally prove that no data structure of polynomial size can represent the set of valid initial states for all models and all formulae. This result holds for all data structures in which membership of a state can be checked in polynomial time. Therefore, it holds not only for all types of BDDs regardless of variable ordering, but also for more powerful data structures, such as RBCs, MTBDDs, ADDs and SDDs. Thus, the size explosion of BDDs is not a limit of these specific representation structures but is unavoidable: every formalism used in the same way would lead to an exponential blow-up in size.
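    The sensitivity of BDD size to variable ordering, which the article's result generalizes past, can be seen on a tiny invented example: the function (x1∧x2)∨(x3∧x4)∨(x5∧x6) is the textbook case whose reduced BDD is linear under an interleaved order and exponential under a separated one. The sketch below counts ROBDD nodes by exhaustive Shannon expansion with a unique table; it is toy code, not how model checkers build BDDs.

```python
# Counts the internal nodes of the reduced ordered BDD of f under a
# given variable order, by exhaustive Shannon expansion with a unique
# table. Exponential-time toy: fine for a handful of variables only.
def bdd_size(f, order):
    nodes = set()                      # unique table of (var, lo, hi)
    def build(i, assign):
        if i == len(order):
            return f(assign)           # terminal: True or False
        v = order[i]
        lo = build(i + 1, {**assign, v: False})
        hi = build(i + 1, {**assign, v: True})
        if lo == hi:                   # redundant test: no node needed
            return lo
        node = (v, lo, hi)
        nodes.add(node)                # identical subfunctions coincide
        return node
    build(0, {})
    return len(nodes)

# (x1 & x2) | (x3 & x4) | (x5 & x6): the classic ordering-sensitive case.
def f(a):
    return (a['x1'] and a['x2']) or (a['x3'] and a['x4']) \
        or (a['x5'] and a['x6'])

print(bdd_size(f, ['x1', 'x2', 'x3', 'x4', 'x5', 'x6']))  # 6 nodes
print(bdd_size(f, ['x1', 'x3', 'x5', 'x2', 'x4', 'x6']))  # 14 nodes
```

    With n pairs the interleaved order gives 2n nodes while the separated one gives 2^(n+1) − 2; the article's contribution is showing that, for the model checking problem, no choice of ordering and no polynomial-size structure of this kind escapes the blow-up.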

    Natural revision is contingently-conditionalized revision

    Natural revision seems so natural: it changes beliefs as little as possible to incorporate new information. Yet, some counterexamples prove it wrong. It is so conservative that it never fully believes: it only believes in the current conditions. This is right in some cases and wrong in others. Which is which? The answer requires extending natural revision from simple formulae expressing universal truths (something holds) to conditionals expressing conditional truths (something holds under certain conditions). The extension is based on the basic principles natural revision follows, identified as minimal change, indifference and naivety: change beliefs as little as possible; equate the likeliness of scenarios by default; believe everything until contradicted. The extension says that natural revision restricts changes to the current conditions. A comparison with a revision without this restriction shows what exactly the current conditions are. They are not what is currently considered true if that contradicts the new information; they include increasingly unlikely scenarios until the new information becomes at least possible.
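    For readers unfamiliar with the base operator: natural revision (Boutilier's) promotes the most plausible worlds satisfying the new information to the most plausible level and leaves everything else in place, which is the "minimal change" the abstract refers to. A minimal sketch over explicit plausibility levels, with invented world names:

```python
# A minimal sketch of natural revision on an explicit plausibility
# ordering, given as a list of levels (sets of worlds), most plausible
# level first. World names and the example ordering are invented.
def natural_revision(levels, phi_worlds):
    # Find the most plausible worlds satisfying the new information.
    for level in levels:
        best = level & phi_worlds
        if best:
            break
    else:
        return levels        # phi holds in no world: leave beliefs alone
    # Promote them to a new bottom level; keep everything else in place.
    revised = [best] + [level - best for level in levels]
    return [l for l in revised if l]

order = [{'w1', 'w2'}, {'w3'}, {'w4'}]       # w1, w2 most plausible
print(natural_revision(order, {'w3', 'w4'}))
# -> [{'w3'}, {'w1', 'w2'}, {'w4'}]: only the best phi-world moves
```

    The sketch shows only the unextended operator the article starts from; the article's extension is about confining such changes to the current conditions when revising by conditionals.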

    Belief revision by examples

    A common assumption in belief revision is that the reliability of the information sources is either given, derived from temporal information, or the same for all. This article does not describe a new semantics for integration; rather, it addresses the problem of obtaining the reliability of the sources from the result of a previous merging. As an example, the relative reliability of two sensors can be assessed from an observation known to be correct, which then allows subsequent mergings of data coming from them.
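    A toy version of the sensor example, under the simplest possible reading of the abstract and with invented names and values: one certain observation fixes the relative reliability of two sensors, which then settles later disagreements.

```python
# Toy reading of the sensor example: an observation known to be correct
# reveals which of two sensors is more reliable; later conflicting
# reports are then settled in its favour. All names are invented.
def assess(report_a, report_b, observed):
    if report_a == observed and report_b != observed:
        return 'a'
    if report_b == observed and report_a != observed:
        return 'b'
    return None                      # tie: no reliability information

def merge(report_a, report_b, more_reliable):
    if report_a == report_b:
        return report_a              # agreement needs no reliability
    if more_reliable == 'a':
        return report_a
    if more_reliable == 'b':
        return report_b
    return None                      # unresolved conflict

winner = assess(report_a=21.5, report_b=19.0, observed=21.5)     # 'a'
print(merge(report_a=18.0, report_b=17.2, more_reliable=winner)) # 18.0
```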

    Belief Integration and Source Reliability Assessment

    Merging beliefs requires knowing the plausibility of the sources of the information being merged. They are typically assumed equally reliable when nothing suggests otherwise. A recent line of research has sprung from the idea of deriving this information from the revision process itself. In particular, the history of previous revisions and previous merging examples provide information for performing subsequent merging operations. Yet, no examples or previous revisions may be available. In spite of this apparent lack of information, something can still be inferred by a try-and-check approach: a relative reliability ordering is assumed, the sources are integrated according to it, and the result is compared with the original information. The final check may contradict the original ordering, as when the result of merging implies the negation of a formula coming from a source initially assumed reliable, or implies a formula coming from a source assumed unreliable. In such cases, the reliability ordering assumed in the first place can be excluded from consideration. Such a scenario is proved real under the classifications of source reliability and the definitions of belief integration considered in this article: sources divided into two, three or many reliability classes; integration mostly by maximal consistent subsets, but weighted distance is also considered. Other results mainly concern integration by maximal consistent subsets and partitions into two and three reliability classes.
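    The try-and-check loop can be sketched in a drastically simplified propositional setting: sources are sets of literals, integration is a greedy stand-in for maximal-consistent-subset merging, and a two-class partition is discarded when the merge result contradicts a source that the partition had classed as reliable. All of the following is an invented toy, not the article's actual definitions.

```python
# Drastically simplified try-and-check over two reliability classes.
# Sources are sets of literals ('a' / '-a'); greedy_merge is a crude
# stand-in for maximal-consistent-subset merging. Invented toy code.
def consistent(lits):
    pos = {l for l in lits if not l.startswith('-')}
    neg = {l[1:] for l in lits if l.startswith('-')}
    return not pos & neg

def greedy_merge(sources, order):
    merged = set()
    for i in order:                    # most reliable sources first
        if consistent(merged | sources[i]):
            merged |= sources[i]
    return merged

def negation(l):
    return l[1:] if l.startswith('-') else '-' + l

def surviving_partitions(sources):
    """Keep a reliable/unreliable partition only if the merge it
    induces contradicts no source the partition classed as reliable."""
    kept = []
    for mask in range(1, 1 << len(sources)):
        reliable = [i for i in range(len(sources)) if mask >> i & 1]
        rest = [i for i in range(len(sources)) if not mask >> i & 1]
        merged = greedy_merge(sources, reliable + rest)
        if not any(negation(l) in merged for i in reliable
                   for l in sources[i]):
            kept.append(reliable)
    return kept

# Two sources say a, one says not a. Every partition that classes the
# two conflicting sources as reliable together is excluded.
print(surviving_partitions([{'a'}, {'-a'}, {'a'}]))
# -> [[0], [1], [2], [0, 2]]
```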